Learning from correlated patterns by simple perceptrons
Authors
Abstract
Similar resources
Learning of correlated patterns by simple perceptrons
Learning behavior of simple perceptrons is analyzed for a teacher-student scenario in which output labels are provided by a teacher network for a set of possibly correlated input patterns, with teacher and student networks of the same type. Our main concern is the effect of statistical correlations among the input patterns on learning performance. For this purpose, we extend to the...
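To make the scenario concrete, here is a minimal Python sketch (not taken from the paper; the equicorrelated covariance and all parameter values are illustrative assumptions): a teacher perceptron labels correlated Gaussian patterns, a student of the same type learns by the classical perceptron rule, and the teacher-student overlap serves as a proxy for generalization.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, rho = 50, 400, 0.3   # dimension, samples, correlation (illustrative values)

# Equicorrelated Gaussian inputs: covariance (1 - rho) * I + rho * ones
cov = (1 - rho) * np.eye(N) + rho * np.ones((N, N))
X = rng.multivariate_normal(np.zeros(N), cov, size=P)

# Teacher and student are networks of the same type: simple perceptrons
w_teacher = rng.normal(size=N)
y = np.sign(X @ w_teacher)

# Student trained by the classical perceptron rule on the teacher's labels
w_student = np.zeros(N)
for _ in range(100):
    for x, t in zip(X, y):
        if np.sign(x @ w_student) != t:
            w_student += t * x

# Teacher-student overlap R: a standard proxy for generalization ability
R = w_teacher @ w_student / (np.linalg.norm(w_teacher) * np.linalg.norm(w_student))
print(f"overlap R = {R:.3f}")
```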
Learning dynamics of simple perceptrons with non-extensive cost functions.
A Tsallis-statistics-based generalization of the gradient descent dynamics (using non-extensive cost functions), recently introduced by one of us, is proposed as a learning rule in a simple perceptron. The resulting Langevin equations are solved numerically for different values of an index q (q = 1 and q ≠ 1 respectively correspond to the extensive and non-extensive cases) and for different co...
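As an illustration of the idea (a sketch under assumptions, not the paper's algorithm): one simple non-extensive deformation replaces a cost E by E^q, whose gradient is q E^(q-1) ∂E/∂w, so q = 1 recovers ordinary gradient descent; adding Gaussian noise gives a Langevin-type update. The smooth perceptron cost below is a hypothetical choice.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 20, 100                         # illustrative sizes
X = rng.normal(size=(P, N)) / np.sqrt(N)
y = np.sign(X @ rng.normal(size=N))    # labels from a random teacher

def cost_and_grad(w):
    # Hypothetical smooth perceptron cost: squared margins of misclassified examples
    m = y * (X @ w)
    mis = m < 0
    E = 0.5 * np.sum(m[mis] ** 2)
    g = ((m[mis] * y[mis])[:, None] * X[mis]).sum(axis=0)
    return E, g

def langevin_step(w, q=1.5, eta=0.05, T=1e-4):
    # Gradient of the deformed cost E^q is q * E^(q-1) * dE/dw;
    # q = 1 recovers the extensive (ordinary) gradient descent case.
    E, g = cost_and_grad(w)
    if E > 0:
        w = w - eta * q * E ** (q - 1) * g
    return w + np.sqrt(2 * T * eta) * rng.normal(size=N)   # thermal noise

w = rng.normal(size=N)
for _ in range(2000):
    w = langevin_step(w)
```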
Learning algorithms for perceptrons from statistical physics
Learning algorithms for perceptrons are deduced from statistical mechanics. Thermodynamical quantities are used as cost functions which may be extremalized by gradient dynamics to find the synaptic efficacies that store the learning set of patterns. The learning rules so obtained are classified in two categories, following the statistics used to derive the cost functions, namely, Boltzmann stat...
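A hedged sketch of the general recipe (the specific Boltzmann-weighted cost below is an illustrative choice of mine, not one of the paper's derived rules): treat each pattern's stability m_mu = y_mu w·x_mu as an energy, sum Boltzmann weights exp(-beta * m_mu) into a cost, and extremize it by gradient dynamics on the normalized synaptic vector.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, beta = 20, 100, 2.0              # illustrative sizes and inverse temperature
X = rng.normal(size=(P, N)) / np.sqrt(N)
y = np.sign(X @ rng.normal(size=N))

def thermo_cost(w):
    # Hypothetical thermodynamic cost: Boltzmann weight of each pattern's
    # stability m_mu = y_mu * (w . x_mu); low stability costs more.
    m = y * (X @ w)
    E = np.sum(np.exp(-beta * m))
    grad = -beta * ((np.exp(-beta * m) * y)[:, None] * X).sum(axis=0)
    return E, grad

w, eta = rng.normal(size=N), 0.05
for _ in range(500):                   # gradient dynamics extremizing the cost
    E, g = thermo_cost(w)
    w -= eta * g
    w /= np.linalg.norm(w)             # keep the synaptic vector on the sphere
```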
Learning Sparse Perceptrons
We introduce a new algorithm designed to learn sparse perceptrons over input representations which include high-order features. Our algorithm, which is based on a hypothesis-boosting method, is able to PAC-learn a relatively natural class of target concepts. Moreover, the algorithm appears to work well in practice: on a set of three problem domains, the algorithm produces classifiers that utili...
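The following is a toy reconstruction in the same spirit (AdaBoost over single-feature weak hypotheses; the target function and all sizes are invented for illustration, and this is not the authors' exact algorithm): boosting concentrates its votes on a few coordinates, yielding a sparse linear-threshold classifier.

```python
import numpy as np

rng = np.random.default_rng(3)
P, N, T = 200, 40, 8            # examples, features, boosting rounds (illustrative)
X = rng.choice([-1.0, 1.0], size=(P, N))
y = np.sign(X[:, 0] + X[:, 3] - X[:, 7] + 0.5)   # sparse hypothetical target

D = np.ones(P) / P              # boosting distribution over examples
w = np.zeros(N)                 # weights of the resulting sparse perceptron
for _ in range(T):
    # Weak hypotheses h(x) = s * x_j: pick the (sign, feature) pair with
    # the smallest weighted error under the current distribution D
    errs = np.array([[D @ (s * X[:, j] != y).astype(float) for j in range(N)]
                     for s in (-1.0, 1.0)])
    si, j = np.unravel_index(np.argmin(errs), errs.shape)
    s, err = (-1.0, 1.0)[si], errs[si, j]
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    w[j] += alpha * s           # the boosted vote lands on few coordinates
    D *= np.exp(-alpha * y * s * X[:, j])
    D /= D.sum()

print("nonzero weights at features:", np.flatnonzero(w))
```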
Multilayer Perceptrons May Learn Simple Rules Quickly
Zero temperature Gibbs learning is considered for a connected committee machine with K hidden units. For large K, the scale of the learning curve strongly depends on the target rule. When learning a perceptron, the sample size P needed for optimal generalization scales so that N ≪ P ≪ KN, where N is the dimension of the input. This even holds for a noisy perceptron rule if a new input is classified ...
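For reference, a minimal definition of the committee-machine architecture in question (shapes and values are hypothetical; K is chosen odd so the majority vote is never tied):

```python
import numpy as np

def committee_machine(x, W):
    # Connected committee machine: each of K hidden perceptrons (rows of W)
    # sees the full N-dimensional input; the output is their majority vote.
    return np.sign(np.sum(np.sign(W @ x)))

rng = np.random.default_rng(4)
K, N = 5, 30                    # K odd avoids a tied vote
W = rng.normal(size=(K, N))
print(committee_machine(rng.normal(size=N), W))
```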
Journal
Journal title: Journal of Physics A: Mathematical and Theoretical
Year: 2008
ISSN: 1751-8113,1751-8121
DOI: 10.1088/1751-8113/42/1/015005